Clinical diagnosis of the eye is performed over multiple data modalities, including scalar clinical labels, vectorized biomarkers, two-dimensional fundus images, and three-dimensional Optical Coherence Tomography (OCT) scans. Clinical practitioners use all available data modalities to diagnose and treat eye diseases such as Diabetic Retinopathy (DR) or Diabetic Macular Edema (DME). Enabling the use of machine learning algorithms within the ophthalmic medical domain requires research into the relationships and interactions between all of the relevant data over a treatment period. Existing datasets are limited in that they neither provide these data nor consider explicit modeling of the relationships between modalities. In this paper, we introduce the Ophthalmic Labels for Investigating Visual Eye Semantics (OLIVES) dataset, which addresses the above limitations. This is the first OCT and near-IR fundus dataset to include clinical labels, biomarker labels, disease labels, and time-series patient treatment information from an associated clinical trial. The dataset consists of 1268 near-IR fundus images, each with at least 49 OCT scans and 16 biomarkers, along with 4 clinical labels and a disease diagnosis of DR or DME. In total, data from 96 eyes are covered over a period of at least two years, with each eye treated for an average of 66 weeks and receiving 7 injections. We benchmark the utility of the OLIVES dataset for ophthalmic data within medical image analysis, and provide benchmarks and concrete research directions for core and emerging machine learning paradigms.
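To make the described per-visit structure concrete, below is a minimal sketch of how one multimodal record could be organized in code; the class name, field names, and shapes are illustrative assumptions, not the dataset's actual schema or loading API.

```python
from dataclasses import dataclass
import numpy as np

@dataclass
class EyeVisitSample:
    """One treatment-visit record for a single eye (illustrative structure only)."""
    fundus_nir: np.ndarray   # 2D near-IR fundus image, e.g. shape (H, W)
    oct_volume: np.ndarray   # stack of >= 49 OCT B-scans, e.g. shape (49, H, W)
    biomarkers: np.ndarray   # vector of 16 biomarker labels
    clinical_labels: dict    # 4 scalar clinical labels keyed by name
    diagnosis: str           # "DR" or "DME"
    eye_id: str              # anonymized eye identifier
    week: int                # treatment week, for time-series modeling
```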
Are extralinguistic signals such as image pixels crucial for inducing constituency grammars? While past work has shown substantial gains from multimodal cues, we investigate whether such gains persist in the presence of rich information from large language models (LLMs). We find that our approach, LLM-based C-PCFG (LC-PCFG), outperforms previous multi-modal methods on the task of unsupervised constituency parsing, achieving state-of-the-art performance on a variety of datasets. Moreover, LC-PCFG results in an over 50% reduction in parameter count, and speedups in training time of 1.7x for image-aided models and more than 5x for video-aided models, respectively. These results challenge the notion that extralinguistic signals such as image pixels are needed for unsupervised grammar induction, and point to the need for better text-only baselines in evaluating the need of multi-modality for the task.
We introduce Structured 3D Features, a model based on a novel implicit 3D representation that pools pixel-aligned image features onto dense 3D points sampled from a parametric, statistical human mesh surface. The 3D points have associated semantics and can move freely in 3D space. This allows for optimal coverage of the person of interest, beyond just the body shape, which in turn, additionally helps modeling accessories, hair, and loose clothing. Owing to this, we present a complete 3D transformer-based attention framework which, given a single image of a person in an unconstrained pose, generates an animatable 3D reconstruction with albedo and illumination decomposition, as a result of a single end-to-end model, trained semi-supervised, and with no additional postprocessing. We show that our S3F model surpasses the previous state-of-the-art on various tasks, including monocular 3D reconstruction, as well as albedo and shading estimation. Moreover, we show that the proposed methodology allows novel view synthesis, relighting, and re-posing the reconstruction, and can naturally be extended to handle multiple input images (e.g. different views of a person, or the same view, in different poses, in video). Finally, we demonstrate the editing capabilities of our model for 3D virtual try-on applications.
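For intuition about pooling pixel-aligned image features onto 3D body points, here is a minimal sketch under a simple pinhole-camera assumption; the function name, shapes, and projection model are illustrative, not the S3F implementation.

```python
import torch
import torch.nn.functional as F

def pool_pixel_aligned_features(feat_map, points_3d, cam_K):
    """Sketch of pixel-aligned feature pooling (illustrative, not the S3F code).

    feat_map  : (B, C, H, W) image feature map from a 2D backbone
    points_3d : (B, N, 3) points sampled on a parametric body mesh, in camera coords
    cam_K     : (B, 3, 3) pinhole intrinsics
    Returns (B, N, C) features, one per 3D point.
    """
    # Perspective projection of every 3D point into the image plane.
    proj = torch.einsum('bij,bnj->bni', cam_K, points_3d)        # (B, N, 3)
    uv = proj[..., :2] / proj[..., 2:3].clamp_min(1e-6)          # pixel coords

    # Normalize to [-1, 1] for grid_sample (x along W, y along H).
    H, W = feat_map.shape[-2:]
    grid = torch.stack([2 * uv[..., 0] / (W - 1) - 1,
                        2 * uv[..., 1] / (H - 1) - 1], dim=-1)   # (B, N, 2)

    # Bilinear sampling of the feature map at each projected location.
    sampled = F.grid_sample(feat_map, grid.unsqueeze(2), align_corners=True)
    return sampled.squeeze(-1).permute(0, 2, 1)                  # (B, N, C)
```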
The need for data privacy and security -- enforced through increasingly strict data protection regulations -- renders the use of healthcare data for machine learning difficult. In particular, the transfer of data between different hospitals is often not permissible, and thus cross-site pooling of data is not an option. The Personal Health Train (PHT) paradigm proposed within the GO-FAIR initiative implements an 'algorithm to the data' paradigm that ensures that distributed data can be accessed for analysis without transferring any sensitive data. We present PHT-meDIC, a productively deployed open-source implementation of the PHT concept. Containerization allows us to easily deploy even complex data analysis pipelines (e.g., genomics, image analysis) across multiple sites in a secure and scalable manner. We discuss the underlying technological concepts, security models, and governance processes. The implementation has been successfully applied to distributed analyses of large-scale data, including applications of deep neural networks to medical image data.
Enhancing resilience in distributed networks in the face of malicious agents is an important problem for which many key theoretical results and applications require further development and characterization. This work focuses on the problem of distributed optimization in multi-agent cyberphysical systems, where a legitimate agent's dynamic is influenced both by the values it receives from potentially malicious neighboring agents, and by its own self-serving target function. We develop a new algorithmic and analytical framework to achieve resilience for the class of problems where stochastic values of trust between agents exist and can be exploited. In this case we show that convergence to the true global optimal point can be recovered, both in mean and almost surely, even in the presence of malicious agents. Furthermore, we provide expected convergence rate guarantees in the form of upper bounds on the expected squared distance to the optimal value. Finally, we present numerical results that validate the analytical convergence guarantees we present in this paper even when the malicious agents compose the majority of agents in the network.
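As a rough illustration of how stochastic trust values between agents could be folded into a resilient distributed step, the sketch below filters neighbors by their running mean trust and then takes a local gradient step; the function name, threshold rule, and update form are assumptions, and the paper's actual weighting and analysis are more involved.

```python
import numpy as np

def resilient_update(x_i, neighbor_values, trust_history, grad_f_i,
                     step_size, trust_threshold=0.5):
    """One illustrative resilient-consensus + gradient step (a sketch, not the
    paper's exact update rule).

    x_i             : current estimate of agent i (np.ndarray)
    neighbor_values : dict {j: x_j} of values received from neighbors
    trust_history   : dict {j: list of stochastic trust observations in [0, 1]}
    grad_f_i        : callable returning the gradient of agent i's local objective
    """
    # Classify neighbors by the running mean of their trust observations;
    # averaging over time is what lets the noisy per-round trust values be exploited.
    trusted = [j for j, obs in trust_history.items()
               if np.mean(obs) > trust_threshold]

    # Average only over neighbors currently deemed legitimate (plus oneself).
    pool = [x_i] + [neighbor_values[j] for j in trusted if j in neighbor_values]
    consensus = np.mean(pool, axis=0)

    # Descend on the agent's own local objective from the consensus point.
    return consensus - step_size * grad_f_i(consensus)
```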
We derive a learning framework to generate routing/pickup policies for a fleet of vehicles tasked with servicing stochastically appearing requests on a city map. We focus on policies that 1) give rise to coordination amongst the vehicles, thereby reducing wait times for servicing requests, 2) are non-myopic, considering a-priori unknown potential future requests, and 3) can adapt to changes in the underlying demand distribution. Specifically, we are interested in adapting to fluctuations of actual demand conditions in urban environments, such as on-peak vs. off-peak hours. We achieve this through a combination of (i) online play, a lookahead optimization method that improves the performance of rollout methods via an approximate policy iteration step, and (ii) an offline approximation scheme that allows for adapting to changes in the underlying demand model. In particular, we achieve adaptivity of our learned policy to different demand distributions by quantifying a region of validity using the q-valid radius of a Wasserstein Ambiguity Set. We propose a mechanism for switching the originally trained offline approximation when the current demand is outside the original validity region. In this case, we propose to use an offline architecture, trained on a historical demand model that is closer to the current demand in terms of Wasserstein distance. We learn routing and pickup policies over real taxicab requests in downtown San Francisco with high variability between on-peak and off-peak hours, demonstrating the ability of our method to adapt to real fluctuation in demand distributions. Our numerical results demonstrate that our method outperforms rollout-based reinforcement learning, as well as several benchmarks based on classical methods from the field of operations research.
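A minimal sketch of the switching idea, assuming a one-dimensional demand summary and hypothetical names: the current demand is compared, in Wasserstein distance, against the demand model the original offline approximation was trained on, and a closer historical model is substituted when the validity radius is exceeded.

```python
from scipy.stats import wasserstein_distance

def select_offline_model(current_demand_samples, models, q_valid_radius):
    """Pick which offline approximation to use for online play (illustrative sketch).

    current_demand_samples : 1-D array of observed demand (e.g. requests per hour)
    models : list of (name, historical_demand_samples, value_function) tuples,
             with the originally trained approximation assumed to be first
    q_valid_radius : validity radius of the originally trained approximation
    """
    # Distance of current demand to the demand model each approximation was trained on.
    distances = [(wasserstein_distance(current_demand_samples, hist), name, vf)
                 for name, hist, vf in models]

    original_dist, original_name, original_vf = distances[0]
    if original_dist <= q_valid_radius:
        return original_name, original_vf   # still inside the region of validity

    # Otherwise, switch to the historical model closest in Wasserstein distance.
    _, best_name, best_vf = min(distances, key=lambda t: t[0])
    return best_name, best_vf
```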
In this paper, we present the Multi-view Extended Videos with Identities (MEVID) dataset for large-scale, video person re-identification (ReID) in the wild. To our knowledge, MEVID represents the most-varied video person ReID dataset, spanning an extensive indoor and outdoor environment across nine unique dates in a 73-day window, various camera viewpoints, and entity clothing changes. Specifically, we label the identities of 158 unique people wearing 598 outfits taken from 8,092 tracklets, with an average length of about 590 frames, seen in 33 camera views from the very large-scale MEVA person activities dataset. While other datasets have more unique identities, MEVID emphasizes a richer set of information about each individual, such as: 4 outfits/identity vs. 2 outfits/identity in CCVID, 33 viewpoints across 17 locations vs. 6 in 5 simulated locations for MTA, and 10 million frames vs. 3 million for LS-VID. Being based on the MEVA video dataset, we also inherit data that is intentionally demographically balanced to the continental United States. To accelerate the annotation process, we developed a semi-automatic annotation framework and GUI that combines state-of-the-art real-time models for object detection, pose estimation, person ReID, and multi-object tracking. We evaluate several state-of-the-art methods on MEVID challenge problems and comprehensively quantify their robustness in terms of changes of outfit, scale, and background location. Our quantitative analysis on the realistic, unique aspects of MEVID shows that there are significant remaining challenges in video person ReID and indicates important directions for future research.
We develop a resilient binary hypothesis testing framework for decision making in adversarial multi-robot crowdsensing tasks. The framework exploits stochastic trust observations between robots to arrive at tractable, resilient decisions at a centralized Fusion Center (FC), even when i) malicious robots are present in the network and may outnumber the legitimate robots, and ii) the FC uses one-shot noisy measurements from all robots. We derive two algorithms to achieve this. The first is the Two Stage Approach (2SA), which estimates the legitimacy of robots based on the received trust observations and provably minimizes the probability of detection error under the worst-case malicious attack. Here, the proportion of malicious robots is known but arbitrary. For an unknown proportion of malicious robots, we develop the Adversarial Generalized Likelihood Ratio Test (A-GLRT), which uses both the reported robot measurements and the trust observations to simultaneously estimate the trustworthiness of the robots, their reporting strategy, and the correct hypothesis. We exploit special problem structure to show that this approach remains computationally tractable despite several unknown problem parameters. We deploy both algorithms in a hardware experiment in which a group of robots performs crowdsensing of traffic conditions on a mock-up road network while subject to a Sybil attack. We extract trust observations for each robot from actual communication signals, which provide statistical information on the uniqueness of the sender. We show that even when the malicious robots form a majority, the FC can reduce the probability of detection error to 30.5% and 29% for the 2SA and the A-GLRT, respectively.
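A minimal sketch in the spirit of the two-stage idea, with an assumed threshold-on-mean-trust rule and a simple fusion step standing in for the actual 2SA decision rule at the fusion center:

```python
import numpy as np

def two_stage_decision(measurements, trust_obs, trust_threshold=0.5):
    """Illustrative two-stage decision: classify robots as legitimate from their
    trust observations, then fuse only the retained reports. Names, the
    thresholding rule, and the fusion rule are assumptions, not the exact 2SA.

    measurements : array of one-shot noisy reports in [0, 1], one per robot (assumed)
    trust_obs    : array (n_robots, n_rounds) of stochastic trust values in [0, 1]
    Returns the decided hypothesis: 0 or 1.
    """
    measurements = np.asarray(measurements, dtype=float)

    # Stage 1: estimate legitimacy from the average trust observation per robot.
    legit = trust_obs.mean(axis=1) > trust_threshold
    if not legit.any():                 # fall back to using all robots
        legit = np.ones_like(legit, dtype=bool)

    # Stage 2: simple fusion of the retained reports (a stand-in for the
    # likelihood-ratio test performed at the fusion center).
    return int(measurements[legit].mean() > 0.5)
```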
Motion planning for multi-limbed climbing robots must consider the robot's posture, joint torques, and how it uses contact forces to interact with its environment. This paper focuses on motion planning for a robot that uses nontraditional locomotion to explore unpredictable environments such as Martian caves. Our robot concept, ReachBot, uses extendable and retractable booms as limbs to achieve a large reachable workspace while climbing. Each extendable boom is capped by a microspine gripper designed for grasping rocky surfaces. ReachBot leverages its large workspace to navigate around obstacles, across crevasses, and over challenging terrain. Our planning approach must be versatile enough to accommodate variable terrain features and robust enough to mitigate the risks arising from the stochastic nature of grasping with spines. In this paper, we introduce a graph traversal algorithm to select a discrete sequence of grasps based on the available terrain features suitable for grasping. This discrete plan is complemented by a decoupled motion planner that considers the alternating phases of body movement and end-effector movement, using a combination of sampling-based planning and sequential convex programming to optimize the individual phases. We use the motion planner to plan trajectories in a simulated 2D cave environment with at least a 95% probability of success and demonstrate improved robustness over a baseline trajectory. Finally, we validate the motion planning algorithm through experiments on a 2D planar prototype.
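To illustrate the discrete grasp-selection stage, here is a minimal graph-search sketch over candidate grasp sites, assuming two sites are connected whenever they lie within a maximum boom reach; the names and cost model are illustrative, not ReachBot's actual planner.

```python
import heapq
import numpy as np

def plan_grasp_sequence(sites, start_idx, goal_idx, max_reach):
    """Illustrative shortest-path search over candidate grasp sites.

    sites : (N, 2) array of 2D grasp-site positions extracted from terrain features
    Returns a list of site indices from start to goal, or None if unreachable.
    """
    sites = np.asarray(sites, dtype=float)
    dist = {start_idx: 0.0}
    prev = {}
    frontier = [(0.0, start_idx)]
    while frontier:
        d, u = heapq.heappop(frontier)
        if u == goal_idx:                      # reconstruct the grasp sequence
            path = [u]
            while u in prev:
                u = prev[u]
                path.append(u)
            return path[::-1]
        if d > dist.get(u, np.inf):
            continue
        for v in range(len(sites)):
            step = np.linalg.norm(sites[v] - sites[u])
            if v == u or step > max_reach:     # only edges within boom reach
                continue
            nd = d + step
            if nd < dist.get(v, np.inf):
                dist[v], prev[v] = nd, u
                heapq.heappush(frontier, (nd, v))
    return None
```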
Recent advances in deep learning algorithms have brought significant benefits to many medical image analysis problems. Training deep learning models typically requires large datasets with expert-labeled annotations. However, acquiring expert-labeled annotations is not only expensive but also subjective and error-prone, and inter- and intra-observer variability introduces noise into the labels. This is a particular problem when using deep learning models to segment medical images, due to ambiguous anatomical boundaries. Image-based medical diagnosis tools that rely on deep learning models trained with incorrect segmentation labels can lead to false diagnoses and treatment suggestions. Multi-rater annotations may be better suited than single-rater annotations for training deep learning models with small training sets. The aim of this paper is to develop and evaluate a method for generating probabilistic labels based on multi-rater annotations and anatomical knowledge of lesion features in MRI, and a method for training segmentation models with these probabilistic labels using a normalized active-passive loss as a noise-tolerant loss function. The model was evaluated by comparison against binary ground truth on 17 knee MRI scans for the clinical segmentation and detection of bone marrow lesions (BML). Compared with a binary cross-entropy loss function, the proposed method improved precision by 14%, recall by 22%, and Dice score by 8%. Overall, the results of this work suggest that the proposed normalized active-passive loss with soft labels successfully mitigates the effects of noisy labels.
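As a sketch of how a normalized active-passive loss can accept probabilistic (soft) labels, the snippet below combines a normalized cross-entropy (active) term with a reverse cross-entropy (passive) term, following the general active-passive recipe from the noisy-label literature; whether this matches the paper's exact formulation is an assumption.

```python
import torch
import torch.nn.functional as F

def normalized_active_passive_loss(logits, soft_labels, alpha=1.0, beta=1.0, A=-4.0):
    """Sketch of a normalized active-passive loss adapted to soft labels.

    logits      : (B, C, H, W) raw network outputs
    soft_labels : (B, C, H, W) probabilistic labels summing to 1 over C
    """
    log_p = F.log_softmax(logits, dim=1)          # log predictions
    p = log_p.exp()

    # Active term: normalized cross entropy with soft targets.
    ce = -(soft_labels * log_p).sum(dim=1)        # per-pixel cross entropy
    norm = -log_p.sum(dim=1)                      # cross entropy summed over all classes
    nce = ce / norm.clamp_min(1e-8)

    # Passive term: reverse cross entropy; log(0) is clamped to the constant A.
    log_q = soft_labels.clamp_min(1e-8).log().clamp_min(A)
    rce = -(p * log_q).sum(dim=1)

    return (alpha * nce + beta * rce).mean()
```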